    Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond

    We present a graphical and dynamic framework for binding and execution of (business) process models. It is tailored to integrate 1) ad hoc processes modeled graphically, 2) third-party services discovered in the (Inter)net, and 3) (dynamically) synthesized process chains that solve situation-specific tasks, with the synthesis taking place not only at design time but also at runtime. Key to our approach is the introduction of type-safe stacked second-order execution contexts that allow for higher-order process modeling. Tamed by our underlying strict service-oriented notion of abstraction, this approach is tailored also to be used by application experts with little technical knowledge: users can select, modify, construct and then pass (component) processes during process execution as if they were data. We illustrate the impact and essence of our framework along a concrete, realistic (business) process modeling scenario: the development of Springer's browser-based Online Conference Service (OCS). The most advanced feature of our new framework allows one to combine online synthesis with the integration of the synthesized process into the running application. This ability leads to a particularly flexible way of implementing self-adaptation, and to a particularly concise and powerful way of achieving variability not only at design time, but also at runtime. Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
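The "processes as data" idea above can be illustrated with a minimal sketch. This is not the jABC/OCS implementation; the names (`Process`, `sequence`, `conference_service`) and the dictionary-based execution context are assumptions chosen for brevity. It shows a second-order process that receives a (component) process as an ordinary argument and binds it at runtime:

```python
# Minimal sketch (hypothetical, not the actual jABC/OCS code) of
# higher-order process modeling: process models are first-class values
# that can be selected, composed, and passed to a running process.
from typing import Callable, Dict

Process = Callable[[Dict], Dict]  # a process transforms a context dict

def sequence(*steps: Process) -> Process:
    """Compose component processes into a process chain."""
    def run(ctx: Dict) -> Dict:
        for step in steps:
            ctx = step(ctx)
        return ctx
    return run

def review_step(ctx): return {**ctx, "reviewed": True}
def notify_step(ctx): return {**ctx, "notified": True}

def conference_service(ctx: Dict, handler: Process) -> Dict:
    # Second-order: the concrete sub-process is bound at runtime,
    # passed in as if it were data.
    return handler(ctx)

result = conference_service({"paper": 42}, sequence(review_step, notify_step))
```

Because `sequence(...)` is just a value, a synthesized process chain could equally well be constructed while the application is already running and handed to `conference_service` on the next invocation.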

    Service-Oriented Design: The jABC Approach

    Reviewing our 10 years of experience in service engineering for telecommunication systems from the point of view of Service-Oriented Design then and now, we observe that much is common to the two communities. In our current research we aim at establishing a link to the notions used by the service-oriented programming (SO) community. We are convinced that combined approaches that blend the flexibility of the current SO scenario with the rigour and semantic standardization culture of the telecommunication community will dramatically increase the productivity of the development of a large class of software systems. Incremental formalization and automatic verification techniques may again be the key to achieving confidence and reliability for services that interact and interoperate on a large distributed scale.

    Low-Code/No-Code Artificial Intelligence Platforms for the Health Informatics Domain

    In the contemporary health informatics space, Artificial Intelligence (AI) has become a necessity for the extraction of actionable knowledge in a timely manner. Low-Code/No-Code (LCNC) AI platforms enable domain experts to leverage the value that AI has to offer by lowering the technical skills overhead. We develop domain-specific, service-oriented platforms in the context of two subdomains of health informatics. In this work we address the core principles and the architectures of these platforms, whose functionality we are constantly extending. Our work conforms to best practices with respect to the integration and interoperability of external services and provides process orchestration in an LCNC model-driven fashion. We chose the CINCO product DIME and a bespoke tool developed in CINCO Cloud as the underlying infrastructure for our LCNC platforms, which address the requirements of our two application domains: public health and biomedical research. In the context of public health, we built an environment for constructing AI-driven web applications for the automated evaluation of Web-based Health Information (WBHI). For biomedical research, we built an AI-driven workflow environment for the computational analysis of highly-plexed tissue images. We extended both underlying application stacks to support the AI service functionality needed to address the requirements of the two application domains. The two case studies presented outline our methodology of developing these platforms through co-design with experts in the respective domains. Moving forward, we anticipate increasing re-use of components, which will reduce the development overhead of extending our existing platforms or developing new applications in similar domains.

    Generating Optimal Decision Functions from Rule Specifications

    In this paper we sketch an approach and a tool for rapid evaluation of large systems of weighted decision rules. The tool re-implements the patented miAamics approach, originally devised as a fast technique for multicriterial decision support. The weighted rules are used to express performance-critical decision functions. MiAamics optimizes the function and generates its efficient implementation fully automatically. Being declarative, the rules allow experts to define rich sets of complex functions without being familiar with any general-purpose programming language. The approach also lends itself to optimizing existing decision functions that can be expressed in the form of these rules. The proposed approach first transforms the system of rules into an intermediate representation based on Algebraic Decision Diagrams. From this data structure, we generate code in a variety of commonly used target programming languages. We illustrate the principle and tools on a small, easily comprehensible example and present results from experiments with large systems of randomly generated rules. The proposed representation is significantly faster to evaluate and often smaller than the original representation. Possible miAamics applications to machine learning concern reducing ensembles of classifiers and allowing for a much faster evaluation of these classification functions. It can also naturally be applied to large-scale recommender systems where performance is key.
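A hypothetical sketch of the weighted-rule idea (not the patented miAamics code, and with a plain lookup table standing in for the Algebraic Decision Diagram): each rule pairs a predicate over boolean attributes with a weight for some outcome, and the decision is the outcome with the highest total weight. "Compilation" then pre-tabulates the decision for every attribute valuation, so evaluation becomes a single lookup:

```python
# Illustrative weighted decision rules (all names and weights invented).
from itertools import product

rules = [  # (predicate over attribute dict, outcome, weight)
    (lambda a: a["premium"], "offer_upgrade", 5),
    (lambda a: a["premium"] and a["recent"], "offer_upgrade", 3),
    (lambda a: not a["recent"], "send_reminder", 4),
]

def decide(attrs):
    """Sum the weights of all firing rules per outcome; pick the maximum."""
    scores = {}
    for pred, outcome, weight in rules:
        if pred(attrs):
            scores[outcome] = scores.get(outcome, 0) + weight
    return max(scores, key=scores.get) if scores else None

# Stand-in for the ADD compilation step: pre-tabulate the decision for
# every boolean valuation, turning evaluation into one lookup.
attr_names = ["premium", "recent"]
table = {bits: decide(dict(zip(attr_names, bits)))
         for bits in product([False, True], repeat=len(attr_names))}
```

An ADD improves on this exhaustive table by sharing isomorphic sub-diagrams, which is what keeps the compiled representation small for large rule systems.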

    Component-Oriented Behavior Extraction for Autonomic System Design


    Second-Order Value Numbering

    We present second-order value numbering, a new optimization method for suppressing redundancy, in a version tailored to optimizing the decision procedure of jMosel, a verification tool set for monadic second-order logic on strings (M2L(Str)). The method extends the well-known concept of value numbering to consider not merely values, but semantics transformers that can be efficiently pre-computed and help to avoid redundancy at the second-order level. Since decision procedures for M2L are non-elementary, an optimization method like this can have a great impact on the execution time, even though the reported time of our decision procedure includes the analysis and optimization time spent on second-order value numbering. This is illustrated on a parametric family of hardware circuits, where we observed a performance gain by a factor of 3. This result is surprising, as the design of these circuits already exploits structural similarity.
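For context, the classic first-order technique that this work extends can be sketched in a few lines. This is a standard local value numbering pass, not the jMosel implementation: syntactically distinct expressions that compute the same value receive the same value number, exposing redundant computations.

```python
# Standard local value numbering sketch (not the jMosel code).
def local_value_numbering(block):
    var2vn = {}     # variable -> value number
    expr2vn = {}    # (op, vn, vn) -> value number
    fresh = iter(range(1_000_000))
    redundant = []  # destinations whose value was already available

    def vn(x):
        if x not in var2vn:
            var2vn[x] = next(fresh)
        return var2vn[x]

    for dest, op, a, b in block:
        left, right = vn(a), vn(b)
        if op in ("+", "*"):  # commutative: normalize operand order
            left, right = min(left, right), max(left, right)
        key = (op, left, right)
        if key in expr2vn:
            redundant.append(dest)          # recomputes a known value
            var2vn[dest] = expr2vn[key]
        else:
            expr2vn[key] = var2vn[dest] = next(fresh)
    return redundant

block = [("t1", "+", "a", "b"),
         ("t2", "+", "b", "a"),   # same value as t1 despite swapped operands
         ("t3", "*", "t1", "c")]
```

Second-order value numbering lifts this idea one level: instead of numbering values, it numbers pre-computed semantics transformers, so redundancy is detected among the operations themselves.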

    Editorial


    DSL-based Interoperability and Integration in the Smart Manufacturing Digital Thread

    In the Industry 4.0 ecosystem, a Digital Thread connects the data and processes for smarter manufacturing. It provides an end-to-end integration of the various digital entities, thus fostering interoperability, with the aim to design and deliver complex and heterogeneous interconnected systems. We develop a service-oriented, domain-specific Digital Thread platform in a Smart Manufacturing research and prototyping context. We address the principles, architecture and individual aspects of a growing Digital Thread platform. It conforms to the best practices of coordination languages and of the integration and interoperability of external services from various platforms, and provides orchestration in a formal-methods-based, low-code, graphical model-driven fashion. We chose the Cinco products DIME and Pyrus as the underlying IT platforms for our Digital Thread solution to serve the needs of the applications addressed: manufacturing analytics and predictive maintenance are in fact core capabilities for the success of smart manufacturing operations. In this regard, we extend the capabilities of these two platforms in the vertical domains of data persistence, IoT connectivity and analytics, to support the basic operations of smart manufacturing. External native DSLs provide the data and capability integrations through families of SIBs. The small examples constitute blueprints for the methodology, addressing the knowledge, terminology and concerns of domain stakeholders. Over time, we expect reuse to increase, reducing the new integration and development effort to a progressively smaller portion of the models and code needed for at least the most standard applications.

    Binary Decision Diagrams and Composite Classifiers for Analysis of Imbalanced Medical Datasets

    Imbalanced datasets pose significant challenges in the development of accurate and robust classification models. In this research, we propose an approach that uses Binary Decision Diagrams (BDDs) to conduct pre-checks and suggest appropriate resampling techniques for imbalanced datasets; the application domain in which we apply this technology is medical data collections. BDDs provide an efficient representation of the decision boundaries, enabling interpretability and providing valuable insights. In our experiments, we evaluate the proposed approach on various real-world imbalanced medical datasets, including a cerebral stroke dataset, a diabetes dataset and a sepsis dataset. Overall, our research contributes to the field of imbalanced medical dataset analysis by presenting a novel approach that uses BDDs and composite classifiers in a low-code/no-code environment. The results highlight the potential of our method to assist healthcare professionals in making informed decisions and improving patient outcomes for imbalanced medical datasets.
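The kind of pre-check described above can be sketched very simply, leaving the BDD machinery aside. This is an illustrative stand-in, not the paper's pipeline: the thresholds and the strategy names are assumptions. It measures class imbalance and maps the severity to a resampling suggestion:

```python
# Illustrative imbalance pre-check (hypothetical thresholds and strategy
# names; the paper's actual BDD-based pre-checks are not reproduced here).
from collections import Counter

def suggest_resampling(labels):
    counts = Counter(labels)
    ratio = max(counts.values()) / min(counts.values())  # imbalance ratio
    if ratio < 1.5:
        return "none"                        # roughly balanced
    if ratio < 4:
        return "random_oversampling"         # moderate imbalance
    return "smote_plus_undersampling"        # severe imbalance

# e.g. a stroke-like class distribution of roughly 19:1
labels = ["no_stroke"] * 950 + ["stroke"] * 50
```

In the proposed approach, such checks would additionally consult the BDD representation of the decision boundaries before recommending a technique, rather than relying on the class ratio alone.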